posted 01-22-2011 02:48 PM
It is probably time to start thinking about updating this old study. Ansley (1990) claims to have reported all studies in the last decade (the 1980s). First, he did not report all studies. Two conspicuous missing pieces are Kircher and Raskin (1988) regarding single-issue ZCT exams, and Barland, Honts & Barger (1989) regarding multi-issue screening. Second, many of the studies he did report seem to show perfect or near-perfect accuracy.
Just what exactly has happened since the 1980s that has caused the polygraph to become less accurate???
Continuing to claim accuracy levels of 98% seems to get us some snickers and incredulous responses from scientific folks who have read the NRC study and other published literature.
Another obvious piece of sleight-of-hand fakery will be any attempt to schlep these 98% accuracy figures onto screening exams.
Anyone who provides evidence that is Too-Good-To-Be-True, in the form of perfect or near-perfect results, should be subject to scrutiny. Not just in polygraph. Other fields of science have their share of problems too, when they want something so badly - due to need or financial incentives - that they don't take the time to check and re-check all the evidence and all the alternative views of the available evidence until they are already embarrassed. This is exactly why we need to insist that PDD test developers show their validation data, and not just their results.
People with more objectivity don't seem to agree or to find the same kind of near-perfect results. They do find that polygraph accuracy is very good, just not near perfect. In reality, the criterion validity of polygraph results seems to be about on par with other good tests in other fields of science.
The folks in Utah achieved accuracy in the .90s by sticking to the principles that were supported by data and keeping the test focused on a set of questions that they interpret as a single issue. Of course, multi-issue screening exams are mathematically complicated and will have weaker criterion accuracy. More issues = less measurement and observation of each issue, + more opportunities for inconclusive or erroneous results. It's just math (see the sketch below).
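As a back-of-the-envelope illustration (my own numbers and assumptions here, not anyone's published model), here is a minimal Python sketch assuming each issue on a screening exam is scored independently with the same per-issue decision accuracy:

# Hypothetical illustration only: assumes issues are scored independently,
# each with the same per-issue decision accuracy.
def p_all_issues_correct(per_issue_accuracy, n_issues):
    """Probability that every issue on the exam is called correctly."""
    return per_issue_accuracy ** n_issues

for n in (1, 2, 3, 4):
    print(n, "issues:", round(p_all_issues_correct(0.90, n), 3))

Even at .90 per issue, that works out to roughly .81 for two issues, .73 for three, and .66 for four. The chance of getting every issue right on a four-issue screen drops to about two-thirds before we even talk about inconclusives.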
A more rigorous survey of validated techniques in 2006 showed polygraph accuracy in the same range: upper .80s to low .90s.
So, if what we want is something old to help with short-term marketing, then maybe the old reports are helpful.
If what we want is to be taken seriously by smart critics and ensure our long-term usefulness, then citing the old overly optimistic stuff may only distance and marginalize us among academics and critics in the forensic sciences.
Polygraph is a good test, and it is probably possible to increase its effectiveness in some incremental ways. But we will do a better job helping ourselves by keeping it real.
.02
r
------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)